Labeled data can be expensive to obtain in several application domains, including medical imaging, robotics, and computer vision. To train machine learning models effectively under such high labeling costs, active learning (AL) judiciously selects the most informative data instances to label. This active sampling process can benefit from a statistical model of the underlying function, which is often captured by a Gaussian process (GP). While most GP-based AL approaches rely on a single kernel function, the present contribution advocates an ensemble of GP (EGP) models whose weights adapt to the labeled data collected incrementally. Building on this novel EGP model, a suite of acquisition functions emerges based on uncertainty and disagreement rules. An adaptively weighted ensemble of EGP-based acquisition functions is also introduced to further robustify performance. Extensive tests on synthetic and real datasets demonstrate the merits of the proposed EGP-based approaches over single GP-based AL alternatives.
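To make the ensemble mechanics concrete, below is a minimal sketch of the uncertainty-rule acquisition over a weighted GP ensemble, assuming scikit-learn GPs; the kernel choices and the softmax-over-evidence weight update are illustrative, not the authors' exact scheme.

```python
# Minimal sketch of ensemble-GP active learning (uncertainty rule).
# Kernels and the softmax weight update are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic

kernels = [RBF(), Matern(nu=1.5), RationalQuadratic()]

def egp_select(X_labeled, y_labeled, X_pool):
    """Fit one GP per kernel, weight by marginal likelihood,
    and pick the pool point with the highest ensemble variance."""
    models, log_evidence = [], []
    for k in kernels:
        gp = GaussianProcessRegressor(kernel=k, normalize_y=True)
        gp.fit(X_labeled, y_labeled)
        models.append(gp)
        log_evidence.append(gp.log_marginal_likelihood())
    # Ensemble weights: softmax over per-model log evidence.
    log_evidence = np.array(log_evidence)
    w = np.exp(log_evidence - log_evidence.max())
    w /= w.sum()
    # Ensemble predictive moments via the Gaussian-mixture formulas.
    mus, sigmas = zip(*(gp.predict(X_pool, return_std=True) for gp in models))
    mus, vars_ = np.array(mus), np.array(sigmas) ** 2
    mix_mean = (w[:, None] * mus).sum(axis=0)
    mix_var = (w[:, None] * (vars_ + mus ** 2)).sum(axis=0) - mix_mean ** 2
    return int(np.argmax(mix_var))  # index of the next point to label
```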
Value function approximation is a crucial module for policy evaluation in reinforcement learning when the state space is large or continuous. This paper takes a generative perspective on policy evaluation via temporal-difference (TD) learning, whereby a Gaussian process (GP) prior is placed on the sought value function, and instantaneous rewards are probabilistically generated based on value function evaluations at two consecutive states. Capitalizing on a random-feature-based approximant of the GP prior, an online scalable (OS) approach, termed OS-GPTD, is developed to estimate the value function of a given policy by observing a sequence of state-reward pairs. To benchmark the performance of OS-GPTD even in an adversarial setting where the modeling assumptions are violated, complementary worst-case analyses are performed by upper-bounding the cumulative Bellman error as well as the long-term reward prediction error, relative to their counterparts from a fixed value function estimator with access to the entire state-reward trajectory in hindsight. Moreover, to alleviate the limited expressiveness associated with a single fixed kernel, a weighted ensemble (E) of GP priors is employed to yield an alternative scheme, termed OS-EGPTD, that can jointly infer the value function and select the EGP kernel on the fly. Finally, the performance of the novel OS-(E)GPTD schemes is evaluated on two benchmark problems.
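As a rough illustration of the random-feature idea, the sketch below runs a standard Gaussian conjugate (Kalman-style) update on the TD generative model r_t = φ(s_t)ᵀw − γφ(s_{t+1})ᵀw + noise; these recursions stand in for, and are not identical to, the paper's exact OS-GPTD updates.

```python
# Minimal sketch of random-feature GP temporal-difference learning.
# The recursive Bayesian update is a textbook Gaussian conjugate step.
import numpy as np

class RFGPTD:
    def __init__(self, state_dim, n_features=100, gamma=0.99,
                 lengthscale=1.0, noise_var=0.1, seed=0):
        rng = np.random.default_rng(seed)
        # Random Fourier features approximating an RBF kernel.
        self.W = rng.normal(0.0, 1.0 / lengthscale, (n_features, state_dim))
        self.b = rng.uniform(0.0, 2 * np.pi, n_features)
        self.gamma, self.noise_var = gamma, noise_var
        self.mean = np.zeros(2 * n_features)   # posterior mean of weights
        self.cov = np.eye(2 * n_features)      # posterior covariance

    def phi(self, s):
        z = self.W @ s + self.b
        return np.concatenate([np.cos(z), np.sin(z)]) / np.sqrt(len(self.b))

    def update(self, s, r, s_next):
        # Generative TD model: r = phi(s)'w - gamma * phi(s_next)'w + noise.
        h = self.phi(s) - self.gamma * self.phi(s_next)
        gain = self.cov @ h / (h @ self.cov @ h + self.noise_var)
        self.mean = self.mean + gain * (r - h @ self.mean)
        self.cov = self.cov - np.outer(gain, h @ self.cov)

    def value(self, s):
        return self.phi(s) @ self.mean
```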
To shift the computational burden from real time to offline in delay-critical power systems applications, recent works entertain the idea of using a deep neural network (DNN) to predict the solutions of the AC optimal power flow (AC-OPF) once presented with load demands. As network topologies may change, training this DNN in a sample-efficient manner becomes a necessity. To improve data efficiency, this work utilizes the fact that OPF data are not simple training labels, but constitute the solutions of a parametric optimization problem. We thus advocate training a sensitivity-informed DNN (SI-DNN) to match not only the OPF optimizers, but also their partial derivatives with respect to the OPF parameters (loads). It is shown that the required Jacobian matrices do exist under mild conditions, and can be readily computed from the related primal/dual solutions. The proposed SI-DNN is compatible with a broad range of OPF solvers, including a non-convex quadratically constrained quadratic program (QCQP), its semidefinite program (SDP) relaxation, and MATPOWER; moreover, SI-DNN can be seamlessly integrated into other learning-to-OPF schemes. Numerical tests on three benchmark power systems corroborate the improved generalization and constraint-satisfaction capabilities of the OPF solutions predicted by SI-DNN over conventionally trained DNNs, especially in low-data setups.
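A minimal sketch of the sensitivity-informed loss, assuming PyTorch: the network is fit to both the OPF minimizers and their Jacobians with respect to the loads. The layer sizes, the weight `lam`, and the tensor shapes are assumptions for illustration, not the paper's configuration.

```python
# Minimal sketch of sensitivity-informed (Sobolev-style) training.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 5))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 0.1  # weight on the sensitivity-matching term (assumed)

def si_loss(loads, v_star, jac_star):
    """loads: (B, 10) OPF parameters; v_star: (B, 5) OPF minimizers;
    jac_star: (B, 5, 10) Jacobians d v*/d loads from primal/dual solutions."""
    loads = loads.requires_grad_(True)
    pred = model(loads)
    # Jacobian of the DNN output w.r.t. its input, one output at a time.
    jac_pred = torch.stack([
        torch.autograd.grad(pred[:, i].sum(), loads, create_graph=True)[0]
        for i in range(pred.shape[1])
    ], dim=1)                                   # (B, 5, 10)
    fit = ((pred - v_star) ** 2).mean()         # match the optimizers
    sens = ((jac_pred - jac_star) ** 2).mean()  # match the sensitivities
    return fit + lam * sens
```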
We outline emerging opportunities and challenges for enhancing the utility of AI in scientific discovery. The distinct goals of AI for industry versus AI for science create a tension between identifying patterns in data and discovering patterns in the world from data. If we address the fundamental challenges associated with bridging the gap between domain-driven scientific models and data-driven AI learning machines, we anticipate that these AI models can transform hypothesis generation, scientific discovery, and the scientific process itself.
View-dependent effects such as reflections pose a substantial challenge for image-based and neural rendering algorithms. Above all, curved reflectors are particularly hard, as they lead to highly non-linear reflection flows as the camera moves. We introduce a new point-based representation to compute Neural Point Catacaustics allowing novel-view synthesis of scenes with curved reflectors, from a set of casually-captured input photos. At the core of our method is a neural warp field that models catacaustic trajectories of reflections, so complex specular effects can be rendered using efficient point splatting in conjunction with a neural renderer. One of our key contributions is the explicit representation of reflections with a reflection point cloud which is displaced by the neural warp field, and a primary point cloud which is optimized to represent the rest of the scene. After a short manual annotation step, our approach allows interactive high-quality renderings of novel views with accurate reflection flow. Additionally, the explicit representation of reflection flow supports several forms of scene manipulation in captured scenes, such as reflection editing, cloning of specular objects, reflection tracking across views, and comfortable stereo viewing. We provide the source code and other supplemental material on https://repo-sam.inria.fr/fungraph/neural_catacaustics/
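The sketch below illustrates only the central warp-field idea under assumed inputs: an MLP displaces reflection points as a function of view direction, so the displaced cloud can be splatted by a point renderer. The encodings, the neural renderer, and the primary point cloud are omitted, and the layer sizes are not the paper's.

```python
# Minimal sketch of a view-conditioned neural warp field for a
# reflection point cloud. Inputs and sizes are illustrative.
import torch
import torch.nn as nn

class WarpField(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # per-point 3D displacement
        )

    def forward(self, refl_points, view_dir):
        """refl_points: (N, 3) reflection point cloud;
        view_dir: (3,) normalized camera viewing direction."""
        d = view_dir.expand(refl_points.shape[0], 3)
        # Displaced points are then rasterized by a point splatter.
        return refl_points + self.mlp(torch.cat([refl_points, d], dim=-1))
```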
Modern speech recognition systems exhibit rapid performance degradation under domain shift. This issue is especially prevalent in data-scarce settings, such as low-resource languages, where the diversity of training data is limited. In this work we propose M2DS2, a simple and sample-efficient finetuning strategy for large pretrained speech models, based on mixed source and target domain self-supervision. We find that including source domain self-supervision stabilizes training and avoids mode collapse of the latent representations. For evaluation, we collect HParl, a $120$ hour speech corpus for Greek, consisting of plenary sessions in the Greek Parliament. We merge HParl with two popular Greek corpora to create GREC-MD, a test-bed for multi-domain evaluation of Greek ASR systems. In our experiments we find that, while other Unsupervised Domain Adaptation baselines fail in this resource-constrained environment, M2DS2 yields significant improvements for cross-domain adaptation, even when only a few hours of in-domain audio are available. When we relax the problem to a weakly supervised setting, we find that independent adaptation of audio using M2DS2 and of language using simple LM augmentation techniques is particularly effective, yielding word error rates comparable to the fully supervised baselines.
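A minimal sketch of the mixed self-supervision objective; `asr_loss` and `self_supervised_loss` are hypothetical stand-ins for a pretrained model's supervised and self-supervised losses (e.g., CTC and a wav2vec 2.0-style contrastive loss), and the weights are illustrative.

```python
# Minimal sketch of mixed source/target domain self-supervision during
# finetuning. Method names on `model` are hypothetical interfaces.
def m2ds2_step(model, src_batch, tgt_batch, alpha=0.1, beta=0.1):
    # Supervised ASR loss on labeled source-domain audio.
    sup = model.asr_loss(src_batch.audio, src_batch.text)
    # Self-supervision on BOTH domains: keeping the source term is what
    # stabilizes training and prevents latent-representation collapse.
    ssl_src = model.self_supervised_loss(src_batch.audio)
    ssl_tgt = model.self_supervised_loss(tgt_batch.audio)
    return sup + alpha * ssl_src + beta * ssl_tgt
```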
Three main points: 1. Data Science (DS) will be increasingly important to heliophysics; 2. Methods of heliophysics science discovery will continually evolve, requiring the use of learning technologies [e.g., machine learning (ML)] that are applied rigorously and that are capable of supporting discovery; and 3. To grow with the pace of data, technology, and workforce changes, heliophysics requires a new approach to the representation of knowledge.
In this work, we propose a novel framework for estimating the dimension of the data manifold using a trained diffusion model. A trained diffusion model approximates the gradient of the log density of a noise-corrupted version of the target distribution for varying levels of corruption. If the data concentrates around a manifold embedded in the high-dimensional ambient space, then as the level of corruption decreases, the score function points towards the manifold, as this direction becomes the direction of maximum likelihood increase. Therefore, for small levels of corruption, the diffusion model provides us with access to an approximation of the normal bundle of the data manifold. This allows us to estimate the dimension of the tangent space and, thus, the intrinsic dimension of the data manifold. Our method outperforms linear methods for dimensionality detection such as PPCA in controlled experiments.
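A minimal sketch of the resulting estimator, assuming a `score_fn(x, t)` interface to the trained model: collect scores at slightly noised copies of a point, and read the codimension off the singular spectrum via a largest-gap heuristic (the heuristic is ours for illustration, not necessarily the paper's exact rule).

```python
# Minimal sketch of manifold-dimension estimation from a score model.
import numpy as np

def estimate_dim(score_fn, x0, t_small=0.01, n_samples=512):
    D = x0.shape[0]                      # ambient dimension
    # Perturb x0 with small Gaussian noise and evaluate scores there.
    xs = x0 + t_small * np.random.randn(n_samples, D)
    S = np.stack([score_fn(x, t_small) for x in xs])    # (n_samples, D)
    # At low noise the scores concentrate on the normal bundle, so the
    # singular spectrum shows one large value per normal direction.
    sv = np.linalg.svd(S - S.mean(axis=0), compute_uv=False)
    codim = int(np.argmax(sv[:-1] / (sv[1:] + 1e-12))) + 1  # largest gap
    return D - codim                     # tangent dim = intrinsic dim
```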
Image classification with small datasets has been an active research area in the recent past. However, as research in this area is still in its infancy, two key ingredients are missing for ensuring reliable and truthful progress: a systematic and extensive overview of the state of the art, and a common benchmark to allow for objective comparisons between published methods. This article addresses both issues. First, we systematically organize and connect past studies to consolidate a community that is currently fragmented and scattered. Second, we propose a common benchmark that allows for an objective comparison of approaches. It consists of five datasets spanning various domains (e.g., natural images, medical imagery, satellite data) and data types (RGB, grayscale, multispectral). We use this benchmark to re-evaluate the standard cross-entropy baseline and ten existing methods published between 2017 and 2021 at renowned venues. Surprisingly, we find that thorough hyper-parameter tuning on held-out validation data results in a highly competitive baseline and highlights a stunted growth of performance over the years. Indeed, only a single specialized method dating back to 2019 clearly wins our benchmark and outperforms the baseline classifier.
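The competitive baseline amounts to nothing more exotic than disciplined model selection; a minimal sketch follows, with an assumed training interface and grid values chosen purely for illustration.

```python
# Minimal sketch of the tuned cross-entropy baseline: hyper-parameters
# are selected on a held-out validation split, never on the test set.
# `train_fn` and the grid values are hypothetical placeholders.
from itertools import product

def tune_baseline(train_fn, train_set, val_set):
    """train_fn(train_set, lr=..., wd=...) -> model with .accuracy(dataset)."""
    grid = {"lr": [1e-2, 1e-3, 1e-4], "wd": [1e-3, 1e-4, 0.0]}
    best_model, best_acc = None, -1.0
    for lr, wd in product(grid["lr"], grid["wd"]):
        model = train_fn(train_set, lr=lr, wd=wd)
        acc = model.accuracy(val_set)   # selection on validation only
        if acc > best_acc:
            best_model, best_acc = model, acc
    return best_model
```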
Dataset scaling, also known as normalization, is an essential preprocessing step in a machine learning pipeline. It is aimed at adjusting attribute scales so that they all vary within the same range. This transformation is known to improve the performance of classification models, but there are several scaling techniques to choose from, and this choice is generally not made carefully. In this paper, we execute a broad experiment comparing the impact of 5 scaling techniques on the performance of 20 classification algorithms, among monolithic and ensemble models, applying them to 82 publicly available datasets with varying imbalance ratios. Results show that the choice of scaling technique matters for classification performance, and that the performance difference between the best and the worst scaling technique is relevant and statistically significant in most cases. They also indicate that choosing an inadequate technique can be more detrimental to classification performance than not scaling the data at all. We also show how the performance variation of an ensemble model, considering different scaling techniques, tends to be dictated by that of its base model. Finally, we discuss the relationship between a model's sensitivity to the choice of scaling technique and its performance, and provide insights into its applicability in different model deployment scenarios. Full results and source code for the experiments in this paper are available in a GitHub repository: https://github.com/amorimlb/scaling_matters
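A minimal sketch of such a comparison with scikit-learn, using five common scalers inside a cross-validated pipeline so that each scaler is fit only on training folds; the dataset and classifier here are placeholders, not the paper's 82-dataset benchmark.

```python
# Minimal sketch comparing scaling techniques for one classifier.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import (StandardScaler, MinMaxScaler,
                                   MaxAbsScaler, RobustScaler,
                                   QuantileTransformer)
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
scalers = {"standard": StandardScaler(), "minmax": MinMaxScaler(),
           "maxabs": MaxAbsScaler(), "robust": RobustScaler(),
           "quantile": QuantileTransformer(n_quantiles=100)}

for name, scaler in scalers.items():
    pipe = make_pipeline(scaler, KNeighborsClassifier())
    scores = cross_val_score(pipe, X, y, cv=5)  # scaler fit per fold
    print(f"{name:9s} mean accuracy = {scores.mean():.3f}")
```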